Multiple Sclerosis and Related Disorders
Elsevier BV
Preprints posted in the last 7 days, ranked by how well they match the content profile of Multiple Sclerosis and Related Disorders, based on 15 papers previously published in the journal. The average preprint has a 0.02% match score for this journal, so anything above that is already an above-average fit.
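The listing does not say how the match score above is computed; as a purely illustrative stand-in, one common approach is cosine similarity between bag-of-words vectors for a journal's content profile and a preprint's text. The function name, texts, and scoring choice below are all assumptions for the sketch, not the site's actual method.

```python
from collections import Counter
import math

def cosine_match(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts (0 to 1)."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical journal profile vs. a preprint title fragment
profile = "multiple sclerosis disability relapse lesion"
preprint = "accelerated forgetting in multiple sclerosis patients"
print(round(cosine_match(profile, preprint), 2))  # 0.37
```

A production matcher would use TF-IDF or embeddings over the journal's published papers, but the ranking principle (score each preprint against a profile, sort descending) is the same.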
Jansen, C.; Stalter, J.; Reuter, S.; Witt, K.
Background: Accelerated long-term forgetting (ALF), defined as an increased rate of memory loss over extended intervals, has so far been detected only in a pilot study of patients with mild multiple sclerosis (MS). This study aimed to (I) confirm the presence of ALF in a larger, heterogeneous MS sample, (II) explore associations with patient-reported outcomes, and (III) assess the diagnostic performance of ALF tests for subjective memory impairment. Methods: This study compared 62 MS patients and 65 age-, sex-, and education-matched healthy controls using standardized memory tests (RAVLT, WMS-IV Logical Memory subtest). Recall was assessed immediately, after 30 minutes, and after 7 days. Seven-day/30-minute recall ratios (QRAVLT, QWMS) served as primary outcomes. Self-report measures included memory complaints, fatigue, depression, and sleep disturbances. Linear regression and receiver operating characteristic (ROC) analyses assessed predictors and diagnostic accuracy. Results: ALF was observed in multiple sclerosis: QRAVLT was lower in patients than in controls (0.64 [95% CI 0.59-0.69] vs. 0.78 [0.73-0.82], p < 0.001), as was QWMS (0.79 [95% CI 0.74-0.84] vs. 0.95 [0.90-1.00], p < 0.001), despite comparable initial learning. Greater fatigue, higher memory complaints, longer disease duration, older age, and greater disability were associated with lower ALF scores. The combined ALF score moderately discriminated subjective memory impairment (AUC 0.74; sensitivity 0.73; specificity 0.73). Conclusion: MS patients showed ALF despite normal initial learning, indicating a specific memory deficit undetected by standard tests. Long-delay recall using the RAVLT and the WMS-IV Logical Memory subtest may improve detection of cognitive impairment in MS.
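The primary outcome above is a simple ratio: 7-day recall divided by 30-minute recall, with lower values meaning faster forgetting. A minimal sketch, with hypothetical word counts (the function name and example numbers are not from the study):

```python
def alf_ratio(recall_7day: float, recall_30min: float) -> float:
    """7-day/30-minute recall ratio; lower values indicate faster forgetting."""
    if recall_30min == 0:
        raise ValueError("30-minute recall must be non-zero")
    return recall_7day / recall_30min

# Example: a participant recalls 9 of 15 list words at 30 minutes and 6 at 7 days
q = alf_ratio(6, 9)
print(round(q, 2))  # 0.67, near the patient group mean of 0.64 reported above
```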
Bovis, F.; Montobbio, N.; Signori, A.; Kalincik, T.; Arnold, D. L.; Tintore, M.; Kappos, L.; Sormani, M. P.
Disability worsening is the critical long-term outcome in multiple sclerosis, yet the Expanded Disability Status Scale incompletely captures neurological deterioration and has limited sensitivity in the short time windows of clinical trials. Composite endpoints incorporating functional measures have been proposed to address these limitations, but whether they reliably improve detection of treatment effects has not been established across trials. We conducted a post-hoc analysis of individual patient data from ten phase III randomised controlled trials (ASCEND, BRAVO, CONFIRM, DEFINE, EXPAND, INFORMS, OLYMPUS, OPERA I/II, and ORATORIO; n = 9,369), spanning relapsing-remitting and progressive multiple sclerosis. Confirmed disability worsening was defined using harmonised criteria with the msprog package and confirmed at 24 weeks. Treatment effects were estimated using Cox proportional hazards models and combined across trials in a one-stage individual patient data framework. Composite endpoints were constructed from the Expanded Disability Status Scale, the timed 25-foot walk test, and the nine-hole peg test using logical unions (OR-type), intersections (AND-type), and majority-vote structures. Sensitivity to treatment effect was quantified using Z-scores (the ratio of the pooled log-hazard ratio to its standard error) and compared to the Expanded Disability Status Scale reference using interaction tests. Event rates varied across components: the timed walk test generated the highest rates (up to 46.8%) while the nine-hole peg test generated the lowest (as low as 2.1%). OR-type composite endpoints showed weaker treatment effects than the Expanded Disability Status Scale alone, with the largest reductions in sensitivity observed for endpoints incorporating the timed walk test (ΔZ up to +2.26; interaction p = 0.004).
These findings were confirmed across disease subtypes and were pronounced in relapsing-remitting trials, where no composite endpoint outperformed the Expanded Disability Status Scale. In progressive multiple sclerosis, the combination of the Expanded Disability Status Scale and the nine-hole peg test showed numerically stronger treatment effects (ΔZ = -1.65), though interaction tests did not reach statistical significance (p = 0.051). Composite endpoints do not systematically improve treatment effect detection in multiple sclerosis trials. Increased event capture driven by the timed walk test introduces noise that dilutes the treatment signal rather than amplifying it, highlighting that event rate and endpoint quality are not interchangeable. Upper limb function assessed by the nine-hole peg test provides complementary and specific information, particularly in progressive disease. The combination of global disability and upper limb measures represents a promising direction for future endpoint development in progressive multiple sclerosis trials, warranting validation.
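The sensitivity metric used above is just the pooled log-hazard ratio divided by its standard error; a composite that adds noisy events keeps the hazard ratio but inflates the standard error, weakening the Z-score. A sketch with illustrative numbers (the hazard ratios and standard errors below are hypothetical, not taken from these trials):

```python
import math

def z_score(log_hr: float, se: float) -> float:
    """Z-score of a pooled treatment effect: log hazard ratio over its SE."""
    return log_hr / se

# Reference endpoint: HR 0.80 with SE 0.07 on the log scale
z_edss = z_score(math.log(0.80), 0.07)
# A noisier composite with the same HR but a larger SE gives a weaker
# (less negative) Z, i.e. a positive delta-Z versus the reference
z_comp = z_score(math.log(0.80), 0.10)
delta_z = z_comp - z_edss
print(round(z_edss, 2), round(z_comp, 2), round(delta_z, 2))  # -3.19 -2.23 0.96
```

This is the mechanism the abstract describes: more events is not automatically more power if the added events are weakly related to treatment.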
Graure, M.; Nierobisch, N.; De Vere-Tyndall, A. J.; Pakeerathan, T.; Ayzenberg, I.; Gernert, J.; Havla, J.; Ringelstein, M.; Aktas, O.; Tkachenko, D.; Huemmert, M.; Trebst, C.; Cedra Fuertes, N. A.; Papadopoulou, A.; Giglhuber, K.; Wicklein, R.; Berthele, A.; Weller, M.; Kana, V.; Roth, P.; Herwerth, M.
Background: Chronic relapsing inflammatory optic neuropathy (CRION) is a steroid-dependent form of optic neuritis with incompletely understood pathophysiology. The identification of myelin oligodendrocyte glycoprotein antibodies (MOG-IgG) in a substantial patient subset has challenged diagnostic and therapeutic management. The aim of this study was to investigate clinical profiles and treatment outcomes of patients with CRION, comparing MOG-IgG-positive (MOG+) and seronegative (MOG-) subgroups. Methods: Patients from six European tertiary centers fulfilling diagnostic criteria for CRION were included. All underwent cell-based autoantibody testing. Clinical outcomes (visual acuity, annualized relapse rate [ARR]), laboratory and imaging findings (MRI, OCT), and treatment responses were retrospectively analyzed. Results: Sixty patients were included (median age 33 years; 70% female); 27 (45%) were MOG+. MOG+ CRION was associated with later onset, higher ARR before treatment (median [IQR] 2 [1-3] vs. 1 [1-2], p = 0.023), and a trend toward shorter inter-relapse intervals. Additional distinguishing features included higher frequencies of antinuclear antibody positivity, elevated CSF interleukin-6, and extensive optic neuritis on MRI. Relapse burden correlated with visual acuity decline and retinal thinning. In MOG+ patients, monoclonal antibody therapy reduced the ARR (n = 21; 2 [1-3] vs. 0 [0-2], p = 0.024), primarily driven by tocilizumab (n = 11; 2 [1-3] vs. 0 [0-1], p = 0.023). In MOG- patients, rituximab and azathioprine showed a trend toward ARR reduction. Conclusion: CRION represents a heterogeneous syndrome encompassing distinct subgroups. MOG+ patients demonstrate higher disease activity but respond favorably to tocilizumab. Serological testing is critical for treatment stratification and preventing relapses.
Auger, S. D.; Varley, J.; Hargovan, M.; Scott, G.
Background: Current medical large language model (LLM) evaluations largely rely on small collections of cases, whereas rigorous safety testing requires large-scale, diverse, and complex cases with verifiable ground truth. Multiple sclerosis (MS) provides an ideal evaluation model, with validated diagnostic criteria and numerous paraclinical tests informing differential diagnosis, investigation, and management. Methods: We generated synthetic MS cases with ground-truth labels for diagnosis, localisation, and management. Four frontier LLMs (Gemini 3 Pro/Flash, GPT 5.2/5 mini) were instructed to analyse cases to provide anatomical localisation, differential diagnoses, investigations, and management plans. An automated evaluator compared these outputs to the ground-truth labels. Blinded subspecialty experts validated 70 cases for realism and automated evaluator accuracy. We then evaluated LLM decision-making across 1,000 cases and scaled to 10,000 to characterise rare, catastrophic failures. Results: Subspecialist expert review confirmed 100% synthetic case realism and 99.8% (95% CI 95.5 to 100) automated evaluation accuracy. Across 1,000 generated MS cases, all LLMs successfully included MS in the differential diagnoses for more than 91% of cases. However, diagnostic competence did not associate with treatment safety. Gemini 3 models had low rates of clinically appropriate steroid recommendations (Flash: 7.2%, 95% CI 5.6 to 8.8; Pro: 15.8%, 95% CI 13.6 to 18.1) compared to GPT 5 mini (23.5%, 95% CI 20.8 to 26.1), frequently overlooking contraindications like active infection. OpenAI models inappropriately recommended acute intravenous thrombolysis for MS cases (9.6% GPT 5.2; 6.4% GPT 5 mini) compared to below 1% for Gemini models. Expanded evaluation (to 10,000 cases) probed these errors in detail.
Thrombolysis was recommended in 10.1% of cases lacking symptom timing information and paradoxically persisted (2.9%) even when symptoms were explicitly documented as more than 14 days old. Conclusion: Automated expert-level evaluation across 10,000 cases characterised artificial intelligence clinical blind spots hitherto invisible to small-scale testing. Massive-scale simulation and automated interrogation should become standard for uncovering serious failures and implementing safety guardrails before clinical deployment exposes patients to risk.
Bombaci, A.; Iadarola, A.; Giraudo, A.; Fattori, E.; Sinagra, S.; Magnino, A.; Calvo, A.; Chiò, A.; Cicolin, A.
Background: Sleep-wake and circadian disturbances are increasingly recognised in people living with amyotrophic lateral sclerosis (plwALS), but endogenous circadian phase timing and its prognostic significance in early disease remain unclear. We assessed whether salivary dim-light melatonin onset (DLMO), an objective marker of central circadian phase, is altered in early plwALS and whether it provides prognostic information. Methods: In this prospective longitudinal observational study, plwALS within 18 months of symptom onset underwent home-based salivary melatonin sampling under dim-light conditions at six predefined time points around habitual sleep onset (HSO). Melatonin profiles were modeled using cubic smoothing splines, and DLMO was defined as the first time the fitted curve reached 3 pg/mL. Clinical, respiratory, and sleep assessments were collected at baseline (T0) and after 6 months (T6); a subgroup repeated saliva sampling at T6. Age- and sex-matched controls underwent melatonin profiling. Associations with disease progression, incident respiratory symptoms, and survival/tracheostomy were examined using regression and survival analyses. Results: Fifty plwALS were enrolled. Compared with controls, plwALS showed an earlier DLMO (20:24 vs 20:58; p=0.028) despite similar HSO and chronotype. Within the ALS cohort, a later baseline DLMO correlated with worse functional/motor status, faster disease progression, incident dyspnea/orthopnea by T6 (adjusted OR 3.02; p=0.017), and poorer survival/tracheostomy-free outcome. In the re-sampled subgroup (n=28), DLMO and other melatonin-derived metrics did not change over 6 months. Conclusions: Circadian phase alterations are detectable in early ALS. Baseline DLMO may represent a non-invasive prognostic biomarker for progression, respiratory symptom emergence, and survival, warranting validation in larger multicentre cohorts.
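The DLMO definition above is a threshold-crossing problem: find the first time the fitted melatonin curve reaches 3 pg/mL. A minimal sketch, using linear interpolation between sampling points instead of the cubic smoothing splines the study fitted; times and concentrations are hypothetical:

```python
def dlmo(times_h, melatonin_pg_ml, threshold=3.0):
    """First time (decimal hours) the melatonin profile crosses the threshold."""
    points = list(zip(times_h, melatonin_pg_ml))
    for (t0, m0), (t1, m1) in zip(points, points[1:]):
        if m0 < threshold <= m1:
            # linearly interpolate the crossing time within this interval
            return t0 + (threshold - m0) / (m1 - m0) * (t1 - t0)
    return None  # profile never reached the threshold

# Six samples from 19:00 to 21:30 (decimal hours), rising past 3 pg/mL
times = [19.0, 19.5, 20.0, 20.5, 21.0, 21.5]
vals = [0.8, 1.2, 2.0, 4.0, 7.5, 12.0]
print(dlmo(times, vals))  # 20.25, i.e. a DLMO of 20:15
```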
Soler-Saez, I.; Galiana-Rosello, C.; Grillo-Risco, R.; Falony, G.; Tepavčević, V.; Vieira Silva, S.; Garcia-Garcia, F.
Biological sex is a key determinant in the onset and progression of multiple diseases. In multiple sclerosis (MS), females exhibit higher disease prevalence, earlier onset, and more pronounced inflammatory activity, whereas males tend to experience a more severe neurodegenerative course, characterized by accelerated central nervous system damage and increased brain atrophy. The gut microbiome has emerged as a critical factor in MS, as its composition can either ameliorate or exacerbate disease progression. In this study, we aimed to identify reproducible sex-associated differences in gut microbial composition across independent cohorts of MS patients. Through a systematic search, we identified six independent studies based on 16S rRNA gene sequencing, comprising a total of 337 samples. Despite substantial inter-study variability, sex-associated differences were more pronounced in MS patients than in healthy controls. We identified 11 microbial taxa showing significant sex-associated differences in MS, nine enriched in females and two in males. Notably, the female-enriched taxa Eggerthella and Eisenbergiella were associated with specific MS subtypes and higher disability. To facilitate the use of our findings by the scientific community, we developed a freely accessible web-based tool that provides full access to our results. Thus, in this work we identified consistent and reproducible sex differences in the gut microbiota of MS patients, highlighting the importance of incorporating sex as a critical variable in microbiome research, with potential implications for understanding disease heterogeneity in MS. IMPORTANCE: Multiple sclerosis (MS) affects females and males differently, but the biological reasons behind these differences are not fully understood. One potential factor is the gut microbiome (i.e., the community of microorganisms living in our intestines), which can influence immune function and disease progression.
In this study, we analyzed data from multiple independent cohorts and found consistent differences in gut microbial composition between female and male MS patients. Notably, certain bacteria were more abundant in females and were linked to more severe disease features. We also developed a freely accessible web tool where researchers can explore the complete findings in detail. Our results highlight the importance of considering sex as a key factor in microbiome research and may help guide more personalized approaches to understanding and treating MS.
Kmiecik, M. J.; O'Brien, L.; Szpyhulsky, M.; Iodice, V.; Freeman, R.; Jordan, J.; Biaggioni, I.; Kaufmann, H.; Vickery, R.; Miller, A.; Saunders, E.; Rushton, E.; Valle, L.; Norcliffe-Kaufmann, L.
Background: Although neurogenic orthostatic hypotension (nOH) is a common and debilitating feature of multiple system atrophy (MSA), little is known about the burden of symptoms in the real world. Objectives: To design and conduct a cross-sectional community-based research survey targeting patients with MSA, with and without nOH. Methods: We recruited patients with MSA to complete an anonymous online survey covering three core themes: 1) timely diagnosis, 2) nOH pharmacotherapy and refractory symptoms, and 3) confidence in physician knowledge. Responses were grouped by pre-specified diagnostic certainty levels. Relationships between symptoms, function, and pharmacotherapy were assessed using univariate and multivariate methods. Results: We analyzed 259 respondents with a self-reported diagnosis of MSA (age: M=64.38, SD=8.09 years; 44% female). In total, 42% also had a diagnosis of nOH; 40% had symptoms highly suspicious of nOH, but no diagnosis; and 21% reported having never had their blood pressure measured in the standing position at a clinical visit. Treatment with a pressor agent was independently associated with the presence of other symptoms of autonomic failure. Each additional nOH symptom reported increased the odds of requiring pharmacotherapy by 18%. Yet, despite anti-hypotensive medication use, 97% of patients reported limitations in their ability to bathe, cook, or arise from a chair/bed, with 76% needing caregiver support for refractory nOH symptoms. Conclusions: This cross-sectional representative sample shows nOH is underrecognized and undertreated in MSA patients, leading to substantial functional limitations. It is our hope that these findings are leveraged for planning future trials and advocating for better treatments.
Obasohan, P. E.; Palmer, J.; Alderson, D.; Yu, D.; Gronne, D. T.; Roos, E. M.; Skou, S. T.; Peat, G. M.
Objective: Unlike several other fields of healthcare, little is known about the size of therapist effects on patient outcomes following rehabilitation for musculoskeletal conditions. We aimed to estimate the proportion of variance in patient outcomes from a structured rehabilitation program explained by therapist effects. Methods: For our observational cohort study we accessed data from the national multicentre Good Life with osteoArthritis in Denmark (GLA:D) osteoarthritis management program. Analyses included 23,021 consecutive eligible adults with hip or knee osteoarthritis (mean (SD) age 65.0 (9.8) years, 71% female) treated by 657 therapists between October 2014 and February 2019. The primary outcome was a ≥30% reduction in pain intensity on a 0-100 VAS at 3 months. Therapist effects were estimated as the variance partition coefficient (intra-class correlation coefficient (ICC)) from two-level random intercept logistic regression models before and after adjusting for patient-level case-mix factors and therapist-level characteristics (number of patients treated, days since therapist certification). Analyses were repeated for a range of secondary outcomes using multiply imputed data and complete-case analysis. Results: 52% of patients reported a ≥30% reduction in pain intensity on the 0-100 VAS at 3 months. In the null model the ICC was 0.007 (95% CI: 0.005, 0.009), which changed little after adjusting for patient- and therapist-level covariates. Upper confidence limits for ICC estimates across all secondary outcomes in multiply imputed and complete-case analyses were less than 0.03. Conclusions: In a nationally implemented osteoarthritis management program delivered by trained healthcare professionals, therapist effects made a minimal contribution to variation in patient outcomes.
KEY MESSAGES. What is already known on this topic: Therapist effects - defined as the effect of a given therapist on patient outcomes as compared to another therapist - have been observed in several fields of healthcare and have important consequences for selection, training, and service improvement. In musculoskeletal rehabilitation, five previous studies suggest that 1-12% of variation in patient-reported outcomes may be attributable to therapist effects, but these estimates were based on relatively small datasets resulting in substantial uncertainty. What this study adds: Our cohort study analysed registry data from 2014-2019 on 23,021 patients and 647 trained therapists from the nationally implemented GLA:D structured osteoarthritis management program in Denmark. We found that therapist effects accounted for less than 3% of total variation in patient-reported pain and quality of life outcomes 3 months after beginning the program. How this study might affect research, practice, or policy: Our findings suggest that contextual factors that relate to therapist effects - therapist characteristics or therapist-patient interaction and alliance - make a minimal contribution to variation in patient outcomes from this structured, group-based rehabilitation intervention. Any contextual effects must be attributable to alternative sources, e.g. patient expectations, intervention setting.
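For a two-level random-intercept logistic model like the one described above, the variance partition coefficient is conventionally computed on the latent scale as the cluster (therapist) variance over itself plus the standard logistic residual variance π²/3. A sketch, with an illustrative therapist-level variance chosen only to land near the reported ICC of 0.007:

```python
import math

def icc_logistic(cluster_variance: float) -> float:
    """Latent-scale ICC for a 2-level random-intercept logistic model."""
    return cluster_variance / (cluster_variance + math.pi ** 2 / 3)

# A tiny between-therapist variance yields an ICC close to the 0.007 above
print(round(icc_logistic(0.0232), 3))  # 0.007
```

The π²/3 term is the variance of the standard logistic distribution, the implicit residual variance of a logit model; the value 0.0232 here is a hypothetical input, not an estimate from the study.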
Polo Sanchez, M.; Lesmes, A. C.; Muni, N.; Vigneault, F.; Novak, R.
Background: Rett Syndrome (RTT) is a severe neurodevelopmental disorder affecting approximately 1 in 10,000 live female births worldwide. The Rett Syndrome Behaviour Questionnaire (RSBQ) remains one of the most widely used standardized behavioral assessment tools for RTT. However, the RSBQ was originally validated only in British English, limiting its applicability for Spanish-speaking caregivers and clinical centers across Latin America and Spain. Objective: The primary aim of this study was to develop and validate the comprehension of the Spanish translation of the RSBQ to ensure cultural and linguistic equivalence, enhance data reliability, and facilitate earlier, more accurate clinical assessments among Spanish-speaking RTT populations. Methods: Surveys were administered in two phases to Spanish-speaking caregivers between November 2023 and September 2025. Phase I consisted of 12 guided survey administrations in which participants could ask clarifying questions and offer linguistic modifications of RSBQ questions. Phase II consisted of independent online administration of the refined Spanish RSBQ and a retest at least 7 days later. Participants were recruited through direct outreach and supported virtually during questionnaire completion. Results: Following data cleaning and quality control, a total of 51 caregivers successfully completed both surveys. The Spanish RSBQ demonstrated high caregiver comprehension and strong engagement across multiple Latin American countries, including Argentina, Mexico, and Peru. Responses were highly correlated between test and retest timepoints, and no question showed biased response distributions.
A slight effect of response interval on test-retest correlation was observed, potentially indicating that natural disease progression confounds retest evaluation over long (>80-day) intervals; however, this effect did not alter the overall linguistic validation results, as an analysis restricted to <21-day test-retest responders confirmed the findings. Conclusions: This linguistic validation study represents the first formal step toward the clinical validation of the Spanish RSBQ, enabling broader inclusion of Spanish-speaking populations in RTT research. The collaborative, bilingual data collection strategy proved both feasible and effective, paving the way for multinational trials and expanding therapeutic accessibility through localized, patient-centered innovation.
Khorsand, B.; Teichrow, D.; Lipton, R. B.; Ezzati, A.
Objective: To describe the design, feasibility, and baseline characteristics of the Migraine Impact on Neurocognitive Dynamics (MIND) study, a 30-day smartphone-based cohort for high-frequency assessment of cognition and symptoms in adults with migraine. Background: Cognitive symptoms are an important component of migraine burden, but they are difficult to measure using single-visit testing or retrospective questionnaires. Repeated smartphone-based assessment may better capture real-world variability in cognition and symptoms. Methods: Adults meeting International Classification of Headache Disorders, 3rd edition, criteria for migraine were enrolled remotely and completed 30 days of once-daily ecological momentary assessments and mobile cognitive tasks delivered through the Mobile Monitoring of Cognitive Change platform. Baseline measures assessed demographics, migraine characteristics, disability, mood, stress, and treatment patterns. Feasibility was evaluated using enrollment, completion, and retention metrics. Results: A total of 177 participants enrolled (mean age 38.8 ± 11.9 years; 79.7% female), including 80/177 (45.2%) with chronic migraine. Across the 30-day protocol, 3688 daily assessments were completed, representing 70.8% of all possible study days, and 70.6% of participants completed at least 20 days of monitoring. Completion remained above 60% across study days. At baseline, chronic migraine was associated with greater burden than low-frequency and high-frequency episodic migraine, including higher MIDAS scores (98.6 vs. 38.7 and 70.3), more days with concentration difficulty (16.0 vs. 7.9 and 11.5), and more days with functional interference (18.5 vs. 7.6 and 13.0). Conclusions: The MIND study demonstrates the feasibility of high-frequency smartphone-based assessment of cognition and symptoms in migraine and provides a methodological foundation for future analyses of within-person cognitive and symptom dynamics across the migraine cycle.
Nagase, M.; Hino, K.; Sakamoto, A.; Seo, M.
Patients with amyotrophic lateral sclerosis (ALS) face critical decisions regarding life-sustaining treatments, such as invasive mechanical ventilation and percutaneous endoscopic gastrostomy. Advance care planning and shared decision-making are standard supportive frameworks, but they often fail to account for structural pressures - progressive decline, shifting patient values, and fear of becoming a burden - that may influence decision-making. This study explores how patients with ALS interpret ventilator and care options amid progressive physical decline, thereby reconsidering approaches to decision support. Using a qualitative descriptive design, the researcher (a nurse/sociologist) conducted 2-3 hour home interviews with five purposively sampled patients with ALS. Data, including eye-tracking-aided responses, were analysed via Sandelowski's framework. Rigour was ensured through team-based triangulation, independent coding by two researchers, and a reflexive audit trail. Subjective narratives were prioritised without medical record cross-referencing to capture patients' experiences. Four categories emerged: (1) Rewriting clinical prognosis into a narrative of exploration via peer models, where meeting active ventilator users transformed future perceptions; (2) The conflict between securing care infrastructure and the burden on family, which greatly influenced the will to survive; (3) Existential fluctuation, where patients' intentions shifted with daily fulfilment and family events; and (4) Governance of the body via pre-emptive technology use and training carers as physical extensions. Findings showed decision-making was a multi-layered process redefining life's meaning within social resources. This necessitates shifting from independent to relational autonomy, where agency relies on care infrastructure, not physical ability. Treatment choice is a dynamic exploration requiring narrative companions to support existential fluctuations.
Professionals must coordinate environments to reduce patient indebtedness. Limitations include the small, resource-advantaged sample (N = 5) and reliance on subjective narratives without medical record verification. Living with ALS means governing a new self through relational support and continuous dialogue.
Perry, A. E.; Zawadzka, M.; Rychlik, J.; Hewitt, C.
Objectives: The primary aim of this study was to assess the feasibility of delivering an adapted problem-solving skills (PSS) intervention by quantifying recruitment, follow-up, and completion rates for a brief problem-solving intervention for people with a mental health diagnosis in two Polish prisons. Design: IAPPS is an open, multi-centred, parallel-group feasibility randomised controlled trial (RCT). Setting: Two prisons in Poland. Participants: Men in custody aged 18 years and older, having a mental illness and living within the prison therapeutic unit. Interventions: The intervention consisted of an adapted PSS intervention plus care as usual (CAU), or care as usual only. It was delivered in groups of up to five people in 1.5-hour sessions over the course of two weeks. Main outcome measures: Primary outcomes were the rates of recruitment and follow-up, and the feasibility of delivering the intervention. Secondary outcomes included measures of depression, general mental health, and coping strategies. Results: 129 male prisoners were screened and 64 were randomly allocated, with a mean age of 53.5 years (SD 14, range 23-84). 59 (95%) prisoners were of Polish origin. Our recruitment rate was 48%. There was differential follow-up, with those in the intervention group less likely to complete the post-test battery than those who received care as usual. Outcome measures were successfully collected at both time points. Conclusions: We were able to recruit, retain, and deliver the intervention within the prison setting; some logistical challenges limited our assessment of intervention engagement. Our data help demonstrate how the RCT study design can be implemented and delivered within the complex prison environment. Trial registration number: ISRCTN 70138247, protocol registration date May 2021.
Aravinth, P.; Withanage, N. D.; Senadheera, B. M.; Pathirage, S.; Athiththan, S. P.; Perera, S. L.; Athiththan, L. V.
Background: Inflammatory markers play an important role in the pathophysiology of lumbar disc herniation (LDH). This study presents a comprehensive multi-assessment of the inflammatory landscape by combining serum inflammatory cytokine quantification, their diagnostic performance, and associations with radiological features, and by integrating the experimental findings into an in-silico protein-protein interaction network. Methods: A multifaceted study design was utilized to quantify and compare the distribution of selected inflammatory cytokines in patients with LDH and control subjects. The diagnostic ability of these cytokines was assessed using receiver operating characteristic curve analysis. The cytokine values were correlated with selected radiological findings, including disc herniation subtypes (protrusion, extrusion, and sequestration) further categorized as contained and non-contained, using Spearman's rank correlation test. Additionally, computational analysis was performed to identify central hubs and functionally enriched pathways. Results: In patients with LDH, IL-6 and IL-1β showed a statistically significant rise (IL-6: p < 0.001; IL-1β: p = 0.001), but IL-6 showed high diagnostic and discriminative power (AUC = 0.99; cut-off: 19.99 pg/mL). Further, IL-1β exhibited a positive correlation with non-contained disc herniation (extrusion and sequestration), while displaying a significant (p < 0.05) negative correlation with protrusion. In silico analysis identified IL-1β, IL-8, TNF-α, IL-6, IL-1, CSF2, CSF3, and IL-10 as central hubs, with IL-1β being the top-ranked hub in determining functionally enriched cytokine-cytokine receptor interaction. Conclusions: The study confirmed IL-6 as a powerful diagnostic marker for LDH, while IL-1β aids in distinguishing contained from non-contained disc herniation. Further, IL-1β was identified as the central hub triggering functionally enriched pathways in the pathogenesis of LDH.
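An AUC like the 0.99 reported for IL-6 has a simple probabilistic reading: the chance that a randomly chosen patient value exceeds a randomly chosen control value (ties counting half). A library-free sketch of that rank-based computation; the cytokine values below are hypothetical, not the study's data:

```python
def auc(patient_vals, control_vals):
    """Empirical AUC: P(patient value > control value), ties scored 0.5."""
    pairs = [(p, c) for p in patient_vals for c in control_vals]
    score = sum(1.0 if p > c else 0.5 if p == c else 0.0 for p, c in pairs)
    return score / len(pairs)

# Hypothetical IL-6 concentrations (pg/mL)
patients = [25.0, 31.5, 22.4, 40.1, 19.99]
controls = [5.2, 8.7, 12.3, 6.1, 15.0]
print(auc(patients, controls))  # 1.0: every patient value exceeds every control
```

An AUC of 0.99 with a cut-off of 19.99 pg/mL means the distributions barely overlap, which is why a single threshold discriminates so well.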
Soto-Fernandez, P.; Toledo-Rodriguez, L.; Figueroa-Vargas, A.; Figueroa-Taiba, P.; Billeke, P.
Background: Cognitive impairment poses a significant challenge to healthcare systems worldwide, impacting patient autonomy, social participation, and quality of life, while placing a considerable burden on caregivers. Non-pharmacological interventions, particularly cognitive training and non-invasive brain stimulation, have emerged as promising therapeutic strategies. Objective: This study aims to quantify the synergistic effects of transcranial direct current stimulation (tDCS) with cognitive training on cognitive function across a spectrum of pathologies that induce cognitive impairment. Methods: We conducted a systematic review and meta-analysis following PRISMA guidelines. We searched PubMed for randomized controlled trials that investigated the effect of combined tDCS and cognitive training compared with cognitive training alone. The analysis was based on the GRADE framework for systematic reviews and meta-analyses. Results: Across 27 studies including 1,012 participants, tDCS combined with cognitive training showed a small effect compared with cognitive training alone (SMD = 0.36, 95% CI: 0.15-0.56). The effect was found only immediately after the intervention and declined during follow-up. Conclusion: tDCS combined with cognitive training may provide a small, short-term benefit for cognitive function, but high heterogeneity across studies and loss of effect at follow-up underscore the need for larger, better-standardized trials to clarify its clinical value.
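The summary statistic above (SMD = 0.36, 95% CI 0.15-0.56) comes from pooling per-study standardized mean differences. A minimal fixed-effect inverse-variance sketch; the per-study SMDs and variances below are hypothetical, and a real meta-analysis of heterogeneous trials like these would use a random-effects model:

```python
def pool_smd(smds, variances):
    """Fixed-effect inverse-variance pooled SMD with a 95% CI."""
    weights = [1 / v for v in variances]              # precision weights
    pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5                    # SE of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three hypothetical trials
smds = [0.5, 0.2, 0.4]
variances = [0.04, 0.02, 0.05]
est, ci = pool_smd(smds, variances)
print(round(est, 2))  # 0.32
```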
Morrin, H.; Badenoch, J. B.; Burchill, E.; Fayosse, A.; Singh-Manoux, A.; Shotbolt, P.; Zandi, M. S.; David, A. S.; Lewis, G.; Rogers, J. P.
Background: Depression is associated with an increased risk of subsequent Parkinson's disease. Neuroimaging studies suggest a neurobiological overlap in mechanisms underlying Parkinson's disease and psychomotor retardation in depression. Our aim was to investigate whether, among individuals with depression, the presence of psychomotor retardation was associated with the development of subsequent Parkinson's disease. Methods: In a retrospective cohort study, electronic healthcare records from individuals diagnosed with depression at age 40 or over in a large mental health service in London, UK were examined for the presence of psychomotor retardation. Linkage to general hospital records was used to ascertain diagnoses of Parkinson's disease between 2007 and 2023. Cox regression was used to compare the hazard of Parkinson's disease in individuals with depression with and without psychomotor retardation. Results: Among 6327 patients with depression, 2402 (38.0%) had psychomotor retardation. The adjusted hazard ratio for development of Parkinson's in those with psychomotor retardation was 1.43 (95% CI 1.02-2.01, p = 0.04). Secondary analyses demonstrated a significant difference in psychomotor retardation incidence at least 10 years before Parkinson's diagnosis. Conclusions: Psychomotor retardation in later-life depression is associated with increased risk of subsequent Parkinson's diagnosis over an extended period of time, suggesting that the relationship cannot solely be explained by misdiagnosis. Psychomotor retardation may therefore serve as a marker of prodromal Parkinson's disease.
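The hazard ratio above comes from Cox regression, which adjusts for covariates. Its simpler unadjusted cousin, the incidence-rate ratio over person-time, conveys the same comparison and is easy to sketch (event counts and person-years below are hypothetical, not the cohort's data):

```python
import math

def rate_ratio(events_exp, py_exp, events_unexp, py_unexp):
    """Unadjusted incidence-rate ratio (exposed vs unexposed) with a 95%
    Wald confidence interval computed on the log scale."""
    rr = (events_exp / py_exp) / (events_unexp / py_unexp)
    se_log = math.sqrt(1 / events_exp + 1 / events_unexp)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

# Hypothetical: 40 Parkinson's diagnoses over 20,000 person-years with
# psychomotor retardation vs 45 over 32,000 person-years without.
print(rate_ratio(40, 20000, 45, 32000))
```

Cox regression generalizes this by letting the baseline hazard vary over time and by adjusting for confounders such as age and sex, which is why the paper reports an adjusted hazard ratio rather than a crude rate ratio.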
Saha, S.; Georgiou-Karistianis, N.; Teo, V.; Szmulewicz, D. J.; Strike, L. T.; Franca, M. C.; Rezende, T. J.; Harding, I. H.
Background Friedreich ataxia (FRDA) is a rare neurodegenerative disorder with substantial heterogeneity in clinical presentation and progression, complicating prognosis and trial design. Neuroimaging offers objective biomarkers to track disease evolution, yet variability in progression patterns remains poorly understood. Objective To identify biologically meaningful FRDA progression subtypes using longitudinal multimodal MRI and assess their associations with demographic, genetic, and clinical factors. Methods Longitudinal structural and diffusion MRI data from 54 FRDA and 57 controls were analysed. Annualised progression rates of macrostructural (volumetric) and microstructural (diffusion) features across cerebellum, brainstem, and spinal cord regions were clustered using Gaussian Mixture Models. Cluster robustness was assessed using per-cluster Jaccard similarity and other validation metrics. Random Forest classification examined predictors of cluster membership. Results Three reproducible clusters/subtypes emerged: micro-dominant/dual progression, characterised by widespread microstructural deterioration with modest volumetric decline; macro-dominant, marked by pronounced volumetric decline with minimal microstructural change; and minimal/no progression, showing negligible change in all measures. FRDA participants predominated in the first two clusters. Random Forest prediction of cluster membership using clinical and demographic variables identified length of the trinucleotide repeat expansion in the FXN gene as key predictor. Conclusions Data-driven clustering of longitudinal MRI identified distinct FRDA subtypes with unique co-progression patterns, underscoring genetic burden as a key driver. Recognising such heterogeneity can improve patient stratification, enable personalised monitoring, and guide targeted therapeutic strategies. 
Future studies should validate these subtypes in larger, more diverse cohorts and integrate additional biomarkers for enhanced precision.
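Cluster robustness in the study was assessed with per-cluster Jaccard similarity: each original cluster is matched against the clusters recovered on a resampled dataset, and its best overlap score is recorded. A minimal sketch with hypothetical memberships (sets of participant indices, not the study's clusters):

```python
def jaccard(a, b):
    """Jaccard similarity between two index sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def cluster_stability(original, resampled):
    """For each original cluster, the best Jaccard match among the clusters
    found on a resampled (e.g. bootstrapped) dataset."""
    return [max(jaccard(c, r) for r in resampled) for c in original]

# Hypothetical memberships:
orig = [{0, 1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
boot = [{0, 1, 2}, {3, 4, 5, 6}, {7, 8, 9}]
print(cluster_stability(orig, boot))  # [0.75, 0.75, 1.0]
```

Averaging these scores over many resamples gives a per-cluster stability index; a common rule of thumb treats mean Jaccard values above roughly 0.75 as indicating a stable cluster.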
Kurtz, J.; Billot, A.; Falconer, I.; Small, H.; Charidimou, A.; Kiran, S.; Varkanitsa, M.
Background Theory of Mind (ToM) deficits are well-documented in right-hemisphere stroke but remain understudied in post-stroke aphasia. Prior work suggests that performance on tasks assessing ToM may be relatively preserved in aphasia and dissociable from language impairment, but these findings are based largely on small studies. This study examined performance on nonverbal false-belief tasks in post-stroke aphasia, its relationship with aphasia severity, and whether vascular brain health, operationalized using cerebral small vessel disease (CSVD) markers, contributed to variability in performance. Methods Forty-four individuals with aphasia completed two nonverbal belief-reasoning tasks assessing spontaneous perspective-taking and self-perspective inhibition. Task accuracy served as the primary outcome. Linear regression models examined associations between task performance, aphasia severity (Western Aphasia Battery-Revised Aphasia Quotient), and CSVD markers, including white matter hyperintensities, cerebral microbleeds, lacunes, and enlarged perivascular spaces in the basal ganglia and centrum semiovale. Results Performance was heterogeneous across tasks, with reduced performance observed in 23% of participants on the Reality-Unknown task and 36% on the Reality-Known task. Aphasia severity was not associated with task accuracy. Greater cerebral microbleed count was associated with lower accuracy on both tasks, while greater basal ganglia enlarged perivascular spaces burden showed a more selective association with lower performance. Conclusions Performance on nonverbal false-belief tasks in aphasia is variable and not explained by aphasia severity alone. These findings suggest that apparent ToM-related difficulties in aphasia may be shaped by broader vascular brain health, supporting a more multidimensional framework for interpreting social-cognitive task performance after stroke.
Shireman, J.; Mukherjee, N.; Brackman, K.; Kurtz, N.; Patniak, A.; McCarthy, L.; Gonugunta, N.; Ammanuel, S.; Dey, M.
Objectives: Academic medical institutions are the gatekeepers of the physician workforce and shape the future of medicine by regulating medical school admissions as well as residency training. Although the field of medicine broadly is seeing more representation from traditionally underrepresented groups, the critical decision-making platform of academic medicine continues to be uncharacteristically homogeneous, represented mainly by white males. This is even more pronounced in surgical subspecialties, such as academic neurosurgery. This study aims to quantify this phenomenon, uncover its driving factors, and define opportunities for improvement. Methods: Using a mixed-methods approach, academic neurosurgical faculty in the U.S. were identified, and their demographic data were collected. An internet search using Google Scholar and Scopus was conducted to determine scholarly activity using number of publications and h-index. Results: We found a significant increase in female faculty in academic neurosurgery within the last decade. Comparing faculty rank between male and female faculty, we found that the majority of female faculty are at the assistant professor level (n=36/79; 45.6%) while male faculty are most often at the full professor rank (n=265/582; 45.5%). A similar trend was seen for under-represented minority neurosurgery faculty. Strong scholarly activity correlated with a departmental chair position for male faculty; however, this trend did not hold for female faculty. There was a significant difference in the number of publications and h-index between female and male faculty, but only when including male faculty outliers at the full professor level. Conclusion: Slowly but steadily, academic neurosurgery is making progress towards a more diverse and representative workforce in the U.S. that better reflects the patient population. 
Facilitating timely progression of female and URM neurosurgeons into senior professorship and academic leadership roles will further advance this essential progress.
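The h-index used above to quantify scholarly activity has a simple definition: the largest h such that the author has h papers each cited at least h times. A minimal sketch (the citation counts are hypothetical):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

# Hypothetical citation counts for one faculty member's papers:
print(h_index([25, 8, 5, 3, 3, 1]))  # → 3
```

Because the index is bounded by the number of papers, it rewards a sustained body of cited work rather than a single highly cited outlier, which is relevant when comparing faculty at different career stages.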
Graham, T. R.; White, M. G.; Blue, B.; Hartley-Brown, M.; Hunter, B. D.; Huynh, C.; Joseph, N.; Keruakous, A.; Pan, D.; Rudolph, P.; Sawhney, R.; Suvannasankha, A.
PURPOSE: Bispecific antibodies (BsAbs) represent a major advancement in the management of relapsed/refractory multiple myeloma (RRMM), offering high response rates even in heavily pretreated patients. However, their use presents operational, safety, and supportive care complexities that require coordinated care teams and evolving infrastructure. This manuscript summarizes best practice recommendations for adverse event (AE) management, outpatient operational models, referral pathways, and emerging strategies to optimize long-term tolerability. METHODS: Medlive, a PlatformQ Health brand, conducted qualitative interviews of academic and community-based clinicians. Discussions focused on BsAb implementation, patient selection and counseling, and AE management. Experts provided recommendations on team-based protocols, transitions of care, and inpatient versus outpatient considerations. RESULTS: Ten hematologists/oncologists (academic n=4; community n=6) described practice patterns, barriers, and perspectives on BsAb use. BsAbs were consistently regarded as highly effective across multiple lines of therapy, particularly for patients without alternatives. Cytokine release syndrome (CRS) was the most common acute toxicity, generally low grade and managed effectively with early tocilizumab, including prophylactic use in outpatient settings. Immune effector cell-associated neurotoxicity syndrome (ICANS) was rare, mild, and best mitigated through early recognition and caregiver support. Infections, largely from BCMA-associated hypogammaglobulinemia, frequently interrupted therapy, necessitating antiviral prophylaxis, Pneumocystis jirovecii pneumonia (PJP) prophylaxis, and intravenous immunoglobulin (IVIG). Outpatient step-up dosing is expanding, supported by prophylactic strategies and academic-community collaboration. Timely referral was emphasized to preserve eligibility. 
Major outpatient challenges included sequencing, infrastructure readiness, and standardized caregiver and staff education. CONCLUSION: Effective community implementation of BsAbs requires multidisciplinary coordination, standardized AE protocols, infection prevention, and infrastructure to support monitoring, referrals, and equitable access. These measures are critical to ensure safe, sustainable integration of bispecific therapies and to optimize patient outcomes.
Houle, T. T.; Lebowitz, A.; Chtay, I.; Patel, T.; McGeary, D. D.; Turner, D. P.
Importance Migraine attacks often occur unpredictably, limiting the ability of individuals to initiate timely preventive or preemptive treatment. Short-term probabilistic forecasting of migraine risk could enable more targeted management strategies. Objective To externally validate the previously developed Headache Prediction Model (HAPRED-I), evaluate an updated continuously learning model (HAPRED-II), and assess the feasibility and short-term safety of delivering individualized probabilistic migraine forecasts directly to patients. Design, Setting, and Participants Prospective 8-week cohort study conducted remotely at two academic medical centers in the United States (Massachusetts General Hospital and Wake Forest Health Sciences) between 2015 and 2019. Adults with recurrent migraine or tension-type headache completed twice-daily electronic diaries. A total of 230 participants contributed 23,335 diary entries across 11,862 participant-days of observation. Main Outcomes and Measures Occurrence of a headache attack within 24 hours following each evening diary entry. Model performance was evaluated using discrimination (area under the receiver operating characteristic curve [AUC]) and calibration. Results External validation of HAPRED-I demonstrated modest discrimination (AUC, 0.59; 95% CI, 0.57-0.61) and poor calibration, with predicted probabilities consistently exceeding observed headache risk. In contrast, the continuously updating HAPRED-II model demonstrated progressive improvement in predictive performance as participant-specific data accumulated. Discrimination increased from an AUC of 0.59 (95% CI, 0.57-0.61) during the first 14 days to 0.66 (95% CI, 0.63-0.70) after the first month, accompanied by improved calibration across predicted risk levels. Over the study period, 6999 individualized forecasts were delivered directly to participants. 
No evidence suggested that receipt of forecasts was associated with increasing headache frequency or worsening predicted headache risk trajectories. Conclusions and Relevance A static migraine forecasting model demonstrated limited transportability to new individuals. In contrast, models that continuously update within individuals may improve predictive accuracy over time and enable real-time delivery of personalized migraine risk forecasts. Further work incorporating richer physiologic and contextual predictors will likely be necessary before such systems can reliably guide clinical treatment decisions.
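Discrimination (AUC) and calibration measure different things: a model can rank risky nights correctly yet systematically over-predict, as reported for HAPRED-I. A minimal sketch of two common calibration summaries, the Brier score and calibration-in-the-large (the forecast probabilities and outcomes below are hypothetical):

```python
def brier(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes
    (lower is better; 0.25 is the score of a constant 0.5 forecast)."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def calibration_in_the_large(probs, outcomes):
    """Mean predicted risk minus observed event rate; positive values mean
    the model over-predicts risk overall."""
    return sum(probs) / len(probs) - sum(outcomes) / len(outcomes)

# Hypothetical evening forecasts and next-day headache outcomes:
probs = [0.8, 0.7, 0.6, 0.5, 0.4]
outcomes = [1, 0, 1, 0, 0]
print(brier(probs, outcomes))
print(calibration_in_the_large(probs, outcomes))
```

Here the mean forecast (0.6) exceeds the observed rate (0.4), the same direction of miscalibration the external validation found; a continuously updating model can shrink this gap as participant-specific data accumulate.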